13 research outputs found

    Radio resource scheduling and smart antennas in cellular CDMA communication systems

    This thesis discusses two important subjects in multi-user wireless communication systems: the radio resource scheduler (RRS) and the smart antenna. The RRS optimizes the available resources among users to increase capacity and enhance system performance. The RRS optimization procedure is based on the network conditions (link gain, interference, ...) and the required quality of service (QoS) of each user. CDMA system capacity and performance can be greatly enhanced by reducing interference, and one technique for doing so is to exploit the spatial structure of the interference. This can be done using smart antennas, which are the second subject of this thesis. Procedures for combining smart antennas with the RRS are discussed as well. A multi-objective optimization approach is proposed to solve the radio resource scheduling problem. New algorithms are derived, namely the Multi-Objective Distributed Power Control (MODPC), Multi-Objective Distributed Power and Rate Control (MODPRC), and Maximum Throughput and Minimum Power Control (MTMPC) algorithms. Modified versions of these algorithms are also obtained, such as the Multi-Objective Totally Distributed Power and Rate Control (MOTDPRC) algorithm, which can be used when only a one-bit quantized Carrier to Interference Ratio (CIR) is available. The Kalman filter is proposed as a second technique for solving the RRS problem, motivated by the fact that the Kalman filter is the optimal linear tracker on the basis of second-order statistics. The RRS is formulated in state-space form, and two different formulations are introduced. A new, simple, and efficient estimator of the CIR is presented and used to construct a novel power control algorithm called the Estimated Step Power Control (ESPC) algorithm. Smart antenna concepts and algorithms are discussed, and a new adaptation algorithm, the General Minimum Variance Distortionless Response (GMVDR) algorithm, is proposed. The combination of MIMO smart antennas with the radio resource scheduler is investigated, and the Kalman filter is suggested as a simple way to join smart antennas and multi-rate power control in a new way. Finally, the performance of the RRS in CDMA cellular communication systems in the presence of smart antennas is studied.
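
    The abstract does not reproduce the MODPC update rule itself, but the family of CIR-based distributed power-control algorithms it extends builds on a simple fixed-point iteration: each user measures its own carrier-to-interference ratio and rescales its transmit power toward a target using only local information. Below is a minimal sketch of that classic iteration (Foschini-Miljanic style); the link-gain matrix, noise level, and target are illustrative assumptions, not values from the thesis.

    ```python
    # Illustrative sketch only: the abstract does not give the MODPC update rule,
    # so this shows the classic CIR-based distributed power-control iteration
    # that such algorithms build on. All names and parameters are hypothetical.
    import numpy as np

    def distributed_power_control(G, noise, gamma_target, iters=50):
        """G[i, j]: link gain from transmitter j to receiver i (toy setup)."""
        n = G.shape[0]
        p = np.ones(n)  # initial transmit powers
        for _ in range(iters):
            # Each user i measures its carrier-to-interference ratio (CIR)...
            interference = G @ p - np.diag(G) * p + noise
            cir = np.diag(G) * p / interference
            # ...and rescales its power toward the target CIR, locally.
            p = (gamma_target / cir) * p
        return p

    # Toy 3-user example: converges when the target CIRs are jointly feasible.
    rng = np.random.default_rng(0)
    G = rng.uniform(0.05, 0.2, (3, 3)) + np.eye(3)
    print(distributed_power_control(G, noise=0.01, gamma_target=1.5))
    ```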

    Improving Precision GNSS Positioning and Navigation Accuracy on Smartphones using Machine Learning

    In this work, we developed a precision positioning algorithm for multi-constellation, dual-frequency global navigation satellite system (GNSS) receivers that predicts latitude and longitude from smartphone GNSS data. Estimates are generated for all epochs that have at least four valid GNSS observations. Receivers (especially low-cost receivers) often have limited channels and computational resources; therefore, the complexity of the algorithms they run must be kept low. The datasets and results in this paper are based on the data provided by Google for the session "High Precision GNSS Positioning on Smartphones Challenge" at the Institute of Navigation conference (ION GNSS+ 2021). We began by exploring and analysing the raw GNSS data, which include the training dataset with its ground truth and the test dataset without ground truth. This analysis gave insight into the nature and correlations of the dataset and helped shape the algorithm proposed for the accuracy-improvement problem. The algorithm was designed using data-science techniques to compute the average of the predictions from several devices' data in the same collection (training-dataset baseline coordinates and their ground truth); the data were then used to train a few selected machine learning algorithms, namely Linear Regression (LR), Bayesian Ridge (BR), and a Neural Network (NN), to predict the offset of the test-data baseline coordinates from the expected (unprovided) ground truth. A simple weighted average (SWA) combining the three ML techniques was also implemented. The results showed improvement in position accuracy, with the simple weighted average (SWA) method achieving the best accuracy, followed by Bayesian Ridge (BR), Linear Regression (LR), and the Neural Network (NN).
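
    As a rough illustration of the ensemble described above, the sketch below trains the three regressors on per-epoch features to predict the coordinate offset and blends them with a simple weighted average. The feature construction, weights, and model settings are assumptions for illustration; the paper's exact pipeline and its handling of the Google data are not reproduced here.

    ```python
    # Hedged sketch of the LR/BR/NN offset-prediction ensemble with a simple
    # weighted average (SWA). Features, weights, and settings are assumptions.
    import numpy as np
    from sklearn.linear_model import LinearRegression, BayesianRidge
    from sklearn.neural_network import MLPRegressor

    def fit_offset_models(X_train, y_offset):
        """y_offset: baseline-minus-ground-truth error in one coordinate."""
        models = {
            "LR": LinearRegression(),
            "BR": BayesianRidge(),
            "NN": MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                               random_state=0),
        }
        for m in models.values():
            m.fit(X_train, y_offset)
        return models

    def swa_predict(models, X, weights=None):
        # Simple weighted average of the three predictors (illustrative weights).
        weights = weights or {"LR": 0.3, "BR": 0.4, "NN": 0.3}
        return sum(w * models[k].predict(X) for k, w in weights.items())

    # Toy demo on synthetic data; real features would come from the GNSS epochs.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 4))                      # hypothetical features
    y = X @ np.array([0.5, -0.2, 0.1, 0.0]) + 0.05 * rng.normal(size=200)
    models = fit_offset_models(X, y)
    print(swa_predict(models, X[:5]))                  # predicted offsets
    ```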

    Optimal power allocation in multi-hop cooperative network using non-regenerative relaying protocol

    Abstract — Cooperative transmission is one of the promising techniques in wireless communication systems: it enables cooperating nodes in a wireless sensor network to share their radio resources through distributed transmission and processing. This technique offers significant diversity gains, as several cooperating nodes forward the source node's data to the destination node over independently fading channels. The benefits of cooperative transmission can be fully exploited only if power is allocated between the source and cooperating nodes in an optimal manner rather than by equal power allocation (EPA). Therefore, in this paper, a closed-form expression for the probability of error is derived using the moment generating function (MGF) approach for a multi-hop cooperative network employing amplify-and-forward (AF) relaying over a Rayleigh fading channel. Moreover, using two different network scenarios, an optimal power allocation (OPA) scheme is investigated on the basis of the channel link qualities between the communicating nodes. Numerical and simulation results validate the performance improvement of OPA over EPA, as well as the further improvement due to relay location in the cooperative network. Keywords—cooperative transmission, amplify-and-forward, maximal ratio combining, optimal power allocation, moment generating function
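
    For context on the MGF approach the abstract names: the standard starting point is the MGF-based average error probability for maximal ratio combining over independent Rayleigh-fading branches. The paper's exact closed form for the AF multi-hop case is not given in the abstract, so the expression below (BPSK over L branches) is only the generic template such derivations build on.

    ```latex
    % Textbook MGF-based average bit error probability for BPSK with L-branch
    % maximal ratio combining over independent Rayleigh fading; this is the
    % generic template, not the paper's closed form for AF multi-hop links.
    \begin{align}
      M_{\gamma_k}(s) &= \frac{1}{1 - s\,\bar{\gamma}_k}
        \qquad \text{(MGF of the exponentially distributed branch SNR } \gamma_k\text{)} \\
      P_e &= \frac{1}{\pi} \int_{0}^{\pi/2}
             \prod_{k=1}^{L} M_{\gamma_k}\!\left(-\frac{1}{\sin^{2}\theta}\right) \mathrm{d}\theta
           \;=\; \frac{1}{\pi} \int_{0}^{\pi/2}
             \prod_{k=1}^{L} \frac{\sin^{2}\theta}{\sin^{2}\theta + \bar{\gamma}_k}\, \mathrm{d}\theta
    \end{align}
    ```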

    Cancer Modeling-on-a-Chip with Future Artificial Intelligence Integration

    Cancer is one of the leading causes of death worldwide, despite large efforts to improve the understanding of cancer biology and the development of treatments. Attempts to improve cancer treatment are limited by the complexity of the local milieu in which cancer cells exist. The tumor microenvironment (TME) consists of a diverse population of tumor cells and stromal cells with immune constituents, microvasculature, extracellular matrix components, and gradients of oxygen, nutrients, and growth factors. The TME is not recapitulated in traditional models used in cancer investigation, limiting the translation of preliminary findings to clinical practice. Advances in 3D cell culture, tissue engineering, and microfluidics have led to the development of “cancer‐on‐a‐chip” platforms that expand the ability to model the TME in vitro and allow for high‐throughput analysis. The advances in the development of cancer‐on‐a‐chip platforms, implications for drug development, challenges to leveraging this technology for improved cancer treatment, and future integration with artificial intelligence for improved predictive drug-screening models are discussed.

    Symbol‐multicast mutual coding for massive MIMO broadcasting


    Machine Learning Utilization in GNSS: Use Cases, Challenges and Future Applications

    The algorithms and models of traditional global navigation satellite systems (GNSSs) perform very well in terms of the availability and accuracy of positioning, navigation and timing (PNT) under good signal conditions. Research is still ongoing to improve their robustness and performance in less-than-optimal signal environments. Growing interest in machine learning (ML) and the potential for its application in many fields has also led to research on its utilization in GNSSs. In the field of GNSSs, ML is changing the ways that navigation problems are prevented and resolved, and it is taking on a significant role in advancing PNT technologies for the future. We illustrate this point by reviewing how ML can enhance GNSS performance and usability, and we discuss areas of GNSSs in which ML algorithms have been applied. We also highlight the commonly implemented ML algorithms and compare their performance when used in similar GNSS use cases. In addition, the challenges and risks of utilizing ML techniques in GNSSs are discussed. Insight is given into prospective areas of GNSSs in which ML can be applied for increased performance, accuracy, and robustness, thereby providing fertile ground for novel research.

    A Systematic Review of Machine Learning Techniques for GNSS Use Cases

    In terms of the availability and accuracy of positioning, navigation, and timing (PNT), traditional Global Navigation Satellite System (GNSS) algorithms and models perform well under good signal conditions. To improve their robustness and performance in less-than-optimal signal environments, researchers have proposed machine learning (ML) based GNSS models (ML models) since as early as the 1990s. However, no systematic study has analyzed the extent of the research on the utilization of ML models in GNSS or their performance. In this study, we perform a systematic review of the literature from 2000 to 2021 that utilizes machine learning techniques in GNSS use cases. We assess the performance of the machine learning techniques in the existing literature on their application to GNSS, and we summarize the strengths and weaknesses of these techniques. In this paper, we identify 213 selected studies and ten categories of machine learning techniques. The results show acceptable performance of machine learning techniques in several GNSS use cases; in most cases, the models using machine learning techniques outperform the traditional GNSS models. ML models are thus promising for GNSS, yet their application in industry is still limited. More effort and incentives are needed to facilitate the utilization of ML models in the PNT context. Therefore, based on the findings of this review, we provide recommendations for researchers and guidelines for practitioners.

    Application of Machine Learning to GNSS/IMU Integration for High Precision Positioning on Smartphone

    This paper describes our solution for the Google Smartphone Decimeter Challenge (GSDC), held from May to August 2022. The GSDC is a competition for improving the positioning accuracy of smartphones. GNSS data from smartphones have lower signal levels and higher observation noise than commercial GNSS receivers, so existing high-precision positioning methods such as precise point positioning (PPP) and real-time kinematic (RTK) cannot be applied directly. The smartphones used to collect the raw GNSS data have multi-constellation, dual-frequency GNSS receivers and inertial measurement unit (IMU) sensors. Multi-sensor fusion has become prominent for seamless navigation systems because of its complementary capabilities to GNSS positioning. In this work, we developed a machine learning (ML) based adaptive positioning approach that estimates smartphone positions by applying post-processed kinematic (PPK) precise positioning techniques to the GNSS datasets. The ML model is used to predict the driving environment (highways, tree-lined streets, or downtown areas). Depending on the predicted driving path, the PPK technique uses the carrier phase to compute the user position with differential corrections from known GNSS base stations. We then use the Rauch–Tung–Striebel (RTS) smoother, which consists of a forward-pass Kalman filter (KF) and a backward recursion smoother, to achieve a loosely coupled integration of GNSS and IMU measurements for position estimation on the smartphone. We refer to this method as LC-GNSS/IMU/ML, using an ML-based adaptive positioning (MAP) real-time kinematic post-processing algorithm (MAP RTK). The method is validated using reference data from survey-grade GNSS receivers provided with the training datasets; final validation is done on Kaggle.com, the host of the GSDC competition. Using the proposed method, we estimated the location of the smartphone and tackled the competition, achieving a final public score of 2.61 m and a final private score of 2.29 m.
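
    To make the smoother structure concrete, here is a minimal RTS smoother sketch (a forward Kalman filter followed by a backward smoothing pass) on a one-dimensional constant-velocity model. The state vector, noise models, and tuning used by MAP RTK are not given in the abstract, so everything below is an illustrative assumption.

    ```python
    # Minimal RTS smoother sketch: forward Kalman filter + backward recursion
    # on a 1-D constant-velocity model. Illustrates the smoother structure
    # only; the paper's actual GNSS/IMU state and models are not reproduced.
    import numpy as np

    def rts_smoother(z, dt=1.0, q=0.1, r=1.0):
        F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (pos, vel)
        H = np.array([[1.0, 0.0]])              # position-only measurements
        Q, R = q * np.eye(2), np.array([[r]])
        n = len(z)
        x, P = np.zeros((n, 2)), np.zeros((n, 2, 2))    # filtered estimates
        xp, Pp = np.zeros((n, 2)), np.zeros((n, 2, 2))  # one-step predictions
        xf, Pf = np.zeros(2), 10.0 * np.eye(2)
        for k in range(n):                      # forward Kalman filter pass
            xp[k], Pp[k] = F @ xf, F @ Pf @ F.T + Q
            K = Pp[k] @ H.T @ np.linalg.inv(H @ Pp[k] @ H.T + R)
            xf = xp[k] + K @ (z[k] - H @ xp[k])
            Pf = (np.eye(2) - K @ H) @ Pp[k]
            x[k], P[k] = xf, Pf
        for k in range(n - 2, -1, -1):          # backward RTS recursion
            C = P[k] @ F.T @ np.linalg.inv(Pp[k + 1])
            x[k] = x[k] + C @ (x[k + 1] - xp[k + 1])
            P[k] = P[k] + C @ (P[k + 1] - Pp[k + 1]) @ C.T
        return x[:, 0]                          # smoothed positions

    # Toy usage on a noisy straight-line trajectory.
    z = np.cumsum(np.ones(50)) + np.random.default_rng(2).normal(0, 1.0, 50)
    print(rts_smoother(z)[:5])
    ```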